

Search for: All records

Creators/Authors contains: "Xu, Zhuangying"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. We present V.Ra, a visual and spatial programming system for robot-IoT task authoring. In V.Ra, programmable mobile robots serve as binding agents that link stationary IoT devices and perform collaborative tasks. We establish an ecosystem that coherently connects the three key elements of robot task planning (human, robot, and IoT) with a single mobile AR device. Users author tasks with the handheld Augmented Reality (AR) interface; placing the AR device onto the mobile robot then directly transfers the task plan in a what-you-do-is-what-robot-does (WYDWRD) manner. The mobile device mediates the interactions between the user, the robot, and the IoT-oriented tasks, and guides path-planning execution with its embedded simultaneous localization and mapping (SLAM) capability. We demonstrate through various use cases and preliminary studies that V.Ra enables instant, robust, and intuitive room-scale navigation and interaction task authoring. A hypothetical sketch of such a task plan follows this entry.
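    The record above does not publish V.Ra's task-plan format, so purely as a rough illustration of the WYDWRD transfer, the Python sketch below models an authored plan as an ordered mix of SLAM waypoints and IoT commands that the robot replays once the AR device is docked. All type and field names here are assumptions.

        # Hypothetical V.Ra-style task plan: an ordered mix of navigation
        # waypoints (from the AR device's SLAM trace) and IoT actions.
        # None of these names come from the paper; they are assumptions.
        from dataclasses import dataclass, field
        from typing import List, Union

        @dataclass
        class Waypoint:
            x: float  # metres, in the SLAM map frame
            y: float

        @dataclass
        class IoTAction:
            device_id: str  # e.g. "lamp-1"
            command: str    # e.g. "power_on"

        Step = Union[Waypoint, IoTAction]

        @dataclass
        class TaskPlan:
            steps: List[Step] = field(default_factory=list)

        def execute(plan: TaskPlan) -> None:
            """Replay the authored plan: drive to each waypoint, fire each IoT command."""
            for step in plan.steps:
                if isinstance(step, Waypoint):
                    print(f"navigate to ({step.x:.2f}, {step.y:.2f})")   # robot base motion
                else:
                    print(f"send '{step.command}' to {step.device_id}")  # IoT message

        plan = TaskPlan([Waypoint(0.0, 0.0), IoTAction("lamp-1", "power_on"),
                         Waypoint(2.5, 1.0), IoTAction("vacuum-1", "start")])
        execute(plan)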
  2. We present V.Ra, a visual and spatial programming system for robot-IoT task authoring. In V.Ra, programmable mobile robots serve as binding agents that link stationary IoT devices and perform collaborative tasks. We establish an ecosystem that coherently connects the three key elements of robot task planning (human-robot-IoT) with one single AR-SLAM device. Users author tasks in an analogous manner with the Augmented Reality (AR) interface; placing the device onto the mobile robot then directly transfers the task plan in a what-you-do-is-what-robot-does (WYDWRD) manner. The mobile device mediates the interactions between the user, the robot, and the IoT-oriented tasks, and guides path-planning execution with its SLAM capability.
  3. Ani-Bot is a modular robotics system that allows users to control their DIY robots using Mixed-Reality Interaction (MRI). The system takes advantage of MRI to let users visually program the robot through the augmented view of a Head-Mounted Display (HMD). In this paper, we first explain the design of the Mixed-Reality (MR) ready modular robotics system, which allows users to perform MRI instantly once they finish assembling the robot. Then, we elaborate on the augmentations the MR system provides in the three primary phases of a construction kit's lifecycle: Creation, Tweaking, and Usage. Finally, we demonstrate Ani-Bot with four application examples and evaluate the system with a two-session user study. The results of our evaluation indicate that Ani-Bot successfully embeds MRI into the lifecycle (Creation, Tweaking, Usage) of DIY robotics and shows strong potential for delivering an enhanced user experience. A hypothetical sketch of phase-gated augmentations follows this entry.
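    The abstract names three lifecycle phases but gives no implementation detail here, so the following Python sketch is hypothetical: it shows one plausible way an MR system could gate which HMD augmentations are active by tracking the current phase. Only the phase names come from the abstract; the augmentation descriptions and API are invented.

        # Hypothetical phase-gated augmentation lookup for an Ani-Bot-style
        # system. Phase names are from the paper; everything else is assumed.
        from enum import Enum, auto

        class Phase(Enum):
            CREATION = auto()  # assembling modules
            TWEAKING = auto()  # adjusting behaviour
            USAGE = auto()     # running the robot

        AUGMENTATIONS = {
            Phase.CREATION: "overlay assembly guidance on unconnected modules",
            Phase.TWEAKING: "render editable behaviour handles on modules",
            Phase.USAGE:    "display live sensor readings and command feedback",
        }

        def augmentation_for(phase: Phase) -> str:
            """Return the HMD overlay active in the given lifecycle phase."""
            return AUGMENTATIONS[phase]

        for phase in Phase:
            print(phase.name, "->", augmentation_for(phase))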
  4. The emerging simultaneous localization and mapping (SLAM) based tracking technique gives mobile AR devices spatial awareness of the physical world. However, smart things are still not fully integrated into this spatial awareness in AR. Therefore, we present Scenariot, a method that enables instant discovery and localization of surrounding smart things while also spatially registering them with a SLAM-based mobile AR system. By exploiting the spatial relationships between mobile AR systems and smart things, Scenariot fosters in-situ interactions with connected devices. We embed Ultra-Wide Band (UWB) RF units into the AR device and into the controllers of the smart things, which allows the distances between them to be measured. With a one-time initial calibration, users localize multiple IoT devices and map them within the AR scenes. Through a series of experiments and evaluations, we validate the localization accuracy as well as the performance of the enabled spatially aware interactions. Further, we demonstrate various use cases through Scenariot. A generic multilateration sketch of the underlying localization idea follows this entry.
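    The abstract describes UWB ranging between the AR device and the smart things' controllers; a standard way to recover a fixed unit's position from ranges taken at several known SLAM poses is linear multilateration. The Python sketch below shows that textbook formulation, not Scenariot's actual calibration algorithm.

        # Textbook 2-D multilateration: the AR device knows its SLAM poses p_i
        # and measures a UWB range r_i to a fixed smart thing at unknown q.
        # This illustrates the geometry only; it is not the paper's method.
        import numpy as np

        def locate(poses: np.ndarray, ranges: np.ndarray) -> np.ndarray:
            """poses: (n, 2) SLAM positions; ranges: (n,) UWB distances; returns (2,) estimate of q."""
            p0, r0 = poses[0], ranges[0]
            # Subtracting |q - p0|^2 = r0^2 from |q - pi|^2 = ri^2 removes the
            # quadratic term and leaves a linear system in q:
            #   2 (pi - p0) . q = (r0^2 - ri^2) + (|pi|^2 - |p0|^2)
            A = 2.0 * (poses[1:] - p0)
            b = (r0**2 - ranges[1:]**2) + (poses[1:]**2).sum(axis=1) - (p0**2).sum()
            q, *_ = np.linalg.lstsq(A, b, rcond=None)
            return q

        # Synthetic check: four device poses ranging a smart thing at (3, 1).
        truth = np.array([3.0, 1.0])
        poses = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0], [5.0, 4.0]])
        ranges = np.linalg.norm(poses - truth, axis=1)
        print(locate(poses, ranges))  # ~ [3. 1.]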
  5. We present Ani-Bot, a modular robotics system that allows users to construct Do-It-Yourself (DIY) robots and interact with them using a mixed-reality approach. Ani-Bot enables a novel user experience by embedding Mixed-Reality Interaction (MRI) in the three phases of interacting with a modular construction kit, namely Creation, Tweaking, and Usage. In this paper, we first present the system design that allows users to perform MRI instantly once they finish assembling the robot. Further, we discuss the specific augmentations MRI offers in each of the three phases.